Explanation Evaluation
Use-Case-Grounded Simulations for Explanation Evaluation
A growing body of research runs human-subject evaluations to study whether providing users with explanations of machine learning models helps them with practical, real-world use cases. However, running user studies is challenging and costly, so each study typically evaluates only a limited number of settings; for example, studies often evaluate only a few arbitrarily selected model explanation methods. To address these challenges and aid user study design, we introduce Simulated Evaluations (SimEvals). SimEvals train algorithmic agents that take as input the information content (such as model explanations) that would be presented to the user and learn to predict answers to the use case of interest. The algorithmic agent's test-set accuracy provides a measure of how predictive the information content is for the downstream use case. We run a comprehensive evaluation on three real-world use cases (forward simulation, model debugging, and counterfactual reasoning) and demonstrate that SimEvals can effectively identify which explanation methods will help humans with each use case. These results provide evidence that SimEvals can be used to efficiently screen an important set of user study design decisions, e.g., selecting which explanations should be presented to the user, before running a potentially costly user study.
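As a concrete illustration of this recipe, the following minimal sketch trains a simple agent on explanation outputs and reports its held-out accuracy as the SimEval score. All data and names here are hypothetical placeholders (random attribution vectors and a synthetic use-case label), not the paper's implementation:

```python
# Sketch of the SimEvals idea: train an algorithmic agent on the information
# content a study participant would see (here, placeholder feature-attribution
# vectors) and measure how well it predicts the use-case answer (here, a
# synthetic binary label standing in for, e.g., "will the model err here?").
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score

rng = np.random.default_rng(0)

# Placeholder data: 1000 examples, each a 10-dim attribution vector that a
# real pipeline would obtain from some explanation method.
attributions = rng.normal(size=(1000, 10))
use_case_labels = (attributions[:, 0] > 0).astype(int)  # synthetic ground truth

X_train, X_test, y_train, y_test = train_test_split(
    attributions, use_case_labels, test_size=0.3, random_state=0
)

agent = LogisticRegression()  # the algorithmic agent
agent.fit(X_train, y_train)

# Higher test accuracy suggests the explanation carries more signal for the
# downstream use case; comparing scores across explanation methods screens
# candidates before a costly user study.
print("SimEval score:", accuracy_score(y_test, agent.predict(X_test)))
```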
Robust Framework for Explanation Evaluation in Time Series Classification
Nguyen, Thu Trang, Nguyen, Thach Le, Ifrim, Georgiana
Time series classification deals with a prevalent data type, temporal sequences, common in domains such as human activity recognition, sports analytics, and general healthcare. This paper provides a framework to quantitatively evaluate and rank explanation methods for time series classification. The recent interest in explanation methods for time series has produced a great variety of explanation techniques. Nevertheless, when the explanations disagree on a specific problem, it remains unclear which of them to use, and comparing multiple explanations to find the right answer is non-trivial. Two key challenges remain: how to quantitatively and robustly evaluate the informativeness of a given explanation method (i.e., its relevance for the classification task), and how to compare explanation methods side by side. We propose AMEE, a robust Model-Agnostic Explanation Evaluation framework for evaluating and comparing multiple saliency-based explanations for time series classification. In this approach, perturbations are added to the input time series, guided by each explanation; the impact of each perturbation on classification accuracy is then measured and used to evaluate the explanation. The results show that perturbing discriminative parts of the time series leads to significant changes in classification accuracy, which can be used to score each explanation. To be robust to different types of perturbations and classifiers, we aggregate the accuracy loss across perturbations and classifiers. This approach allows us to quantify and rank different explanation methods. We provide quantitative and qualitative analyses on synthetic datasets, a variety of time series datasets, and a real-world dataset with known expert ground truth.
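The following sketch outlines this perturb-and-measure loop in the same spirit. The function names, the zero/noise perturbation modes, and the sklearn-style classifier interface are illustrative assumptions, not the paper's exact procedure:

```python
# Sketch of perturbation-based explanation scoring in the spirit of AMEE:
# mask the time steps each explanation marks as most salient, then measure
# how much classifier accuracy drops. Larger drops suggest the explanation
# pointed at more discriminative regions of the series.
import numpy as np

def perturb(X, saliency, frac=0.2, mode="zero", rng=None):
    """Replace the top-`frac` most salient time steps of each series."""
    if rng is None:
        rng = np.random.default_rng(0)
    Xp = X.copy()
    k = max(1, int(frac * X.shape[1]))
    for i in range(X.shape[0]):
        idx = np.argsort(saliency[i])[-k:]   # most salient positions
        if mode == "zero":
            Xp[i, idx] = 0.0
        else:                                # Gaussian-noise perturbation
            Xp[i, idx] = rng.normal(size=k)
    return Xp

def explanation_score(classifiers, X, y, saliency, modes=("zero", "noise")):
    """Mean accuracy loss over perturbation modes and classifiers.

    X: (n_series, n_timesteps) array; saliency: same shape, one map per
    series from the explanation method being scored. Classifiers are
    assumed pre-trained with sklearn-style .predict on 2D arrays.
    """
    losses = []
    for clf in classifiers:
        base = (clf.predict(X) == y).mean()
        for mode in modes:
            pert = (clf.predict(perturb(X, saliency, mode=mode)) == y).mean()
            losses.append(base - pert)
    return float(np.mean(losses))
```

Averaging the loss across several classifiers and perturbation types, as the abstract describes, makes the resulting explanation ranking less sensitive to the quirks of any single model or perturbation choice.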
Evaluating Explanation Without Ground Truth in Interpretable Machine Learning
Yang, Fan, Du, Mengnan, Hu, Xia
Interpretable Machine Learning (IML) has become increasingly important in many applications, such as autonomous cars and medical diagnosis, where explanations help people better understand how machine learning systems work and enhance their trust in those systems. In robotics in particular, explanations from IML are valuable for providing reasons for adverse or inscrutable actions that could otherwise impair public safety and benefit. However, due to the diversity of scenarios and the subjective nature of explanations, we rarely have ground truth for benchmarking the quality of generated explanations in IML. Having a sense of explanation quality not only matters for quantifying system boundaries, but also helps realize the true benefits to human users in real-world applications. To benchmark evaluation in IML, in this paper we rigorously define the problem of evaluating explanations and systematically review existing efforts. Specifically, we summarize three general aspects of explanation (i.e., predictability, fidelity, and persuasibility) with formal definitions, and review representative methodologies for each under different tasks. Further, we design a unified evaluation framework according to the hierarchical needs of developers and end users, which can be easily adapted to different scenarios in practice. Finally, we discuss open problems and raise several limitations of current evaluation techniques for future exploration.
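As one concrete instance of the fidelity aspect, a common deletion-style check (an illustration under assumed names, not this paper's formal definition) compares how much a model's output shifts when an explanation's top-attributed features are removed versus randomly chosen ones:

```python
# Illustrative deletion-style fidelity check: a faithful explanation should
# identify features whose removal shifts the model's prediction more than
# removing random features does. `model_predict` is a hypothetical callable
# mapping a batch of inputs to per-sample scores.
import numpy as np

def deletion_fidelity(model_predict, x, attribution, k=3, rng=None):
    """Return shift(top-k attributed removed) - shift(k random removed);
    values > 0 suggest the attribution is faithful to the model."""
    if rng is None:
        rng = np.random.default_rng(0)
    base = model_predict(x[None, :])[0]

    top = np.argsort(np.abs(attribution))[-k:]   # most important features
    x_top = x.copy()
    x_top[top] = 0.0
    shift_top = abs(model_predict(x_top[None, :])[0] - base)

    rand = rng.choice(len(x), size=k, replace=False)
    x_rand = x.copy()
    x_rand[rand] = 0.0
    shift_rand = abs(model_predict(x_rand[None, :])[0] - base)

    return shift_top - shift_rand
```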